Moral Emotions for Autonomous Agents
Authors
Abstract
In this chapter we raise some of the moral issues involved in the current development of robotic autonomous agents. Starting from the connection between autonomy and responsibility, we distinguish two sorts of problems: those having to do with guaranteeing that the behavior of the artificial cognitive system will fall within the area of the permissible, and those having to do with endowing such systems with whatever abilities are required for engaging in moral interaction. Only in the second case can we speak of full-blown autonomy, or moral autonomy. We illustrate the first type of case with Arkin's proposal of a hybrid architecture for the control of military robots. As for the second kind of case, that of full-blown autonomy, we argue that a motivational component is needed to ground the self-orientation and the pattern of appraisal required, and we outline how such a motivational component might give rise to interaction in terms of moral emotions. We end by suggesting limits to a straightforward analogy between natural and artificial cognitive systems from this standpoint.
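Arkin's hybrid architecture pairs a conventional deliberative/reactive control system with an "ethical governor" that vets proposed actions against explicit constraints before they are executed, which is one way of keeping behavior within the area of the permissible. The following is a minimal Python sketch of that vetting idea only; the class and constraint names, the Action fields, and the numeric threshold are illustrative assumptions, not Arkin's implementation.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

# Hypothetical action description; names and fields are illustrative,
# not taken from Arkin's architecture.
@dataclass
class Action:
    name: str
    target: str
    expected_collateral: float  # assumed 0.0-1.0 estimate

# A constraint maps an action to True (permissible) or False (forbidden).
Constraint = Callable[[Action], bool]

class EthicalGovernor:
    """Run-time filter that suppresses impermissible actions:
    the deliberative layer proposes, the governor disposes."""

    def __init__(self, constraints: List[Constraint]):
        self.constraints = constraints

    def vet(self, action: Action) -> Optional[Action]:
        # An action passes only if every explicit constraint allows it.
        if all(check(action) for check in self.constraints):
            return action
        return None  # suppressed: behavior stays within the permissible

# Example constraints: purely illustrative stand-ins for rules-of-engagement
# or laws-of-war style restrictions.
no_protected_targets: Constraint = lambda a: a.target != "protected_site"
low_collateral: Constraint = lambda a: a.expected_collateral < 0.1

governor = EthicalGovernor([no_protected_targets, low_collateral])
print(governor.vet(Action("engage", "protected_site", 0.05)))  # None (suppressed)
print(governor.vet(Action("engage", "combatant", 0.02)))       # passes
```

Note that a filter of this kind only addresses the first, "bounded permissibility" problem; it contains no motivational component of the sort the chapter argues is needed for moral autonomy proper.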
Similar Papers
Agents with a Moral Dimension (Doctoral Consortium)
As argued by [9], moral decision making entails considering alternatives and assessing the pros and cons of their possible consequences for self and others. The concept of moral emotions has been introduced in affective neuroscience [9], and neurobiological findings [7] show that moral emotions are used to judge the adequacy of actions and are central to moral behavior, decision ma...
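A bare-bones rendition of the appraisal scheme in this snippet might score each alternative by its anticipated consequences for self and for others, as in the sketch below; the Alternative fields, the additive scoring, the other_regard weight, and the example numbers are all hypothetical, not a model drawn from [9].

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Alternative:
    name: str
    utility_self: float    # anticipated benefit/harm to the agent
    utility_others: float  # anticipated benefit/harm to others

def moral_choice(alternatives: List[Alternative],
                 other_regard: float = 1.0) -> Alternative:
    # other_regard > 1 weights others' outcomes above the agent's own.
    return max(alternatives,
               key=lambda a: a.utility_self + other_regard * a.utility_others)

options = [Alternative("keep promise", -0.2, 0.8),
           Alternative("break promise", 0.5, -0.9)]
print(moral_choice(options).name)  # "keep promise"
```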
Is It Morally Acceptable for a System to Lie to Persuade Me?
Given the fast rise of increasingly autonomous artificial agents and robots, a key acceptability criterion will be the possible moral implications of their actions. In particular, intelligent persuasive systems (systems designed to influence humans via communication) constitute a highly sensitive topic because of their intrinsically social nature. Still, ethical studies in this area are rare an...
A Computational Model for Simulation of Empathy and Moral Behavior
Emotions and feelings are now considered decisive in the human intelligent decision process. In particular, social emotions would help us to strengthen the group and cooperate. It is still a matter of debate what motivates biological creatures to cooperate or not with their group. Do all kinds of cooperation hide a selfish interest, or does true altruism exist? If we pore over ...
The Liability Problem for Autonomous Artificial Agents
This paper describes and frames a central ethical issue, the liability problem, facing the regulation of artificial computational agents, including artificial intelligence (AI) and robotic systems, as they become increasingly autonomous and supersede current capabilities. While it frames the issue in legal terms of liability and culpability, these terms are deeply imbued and interconnected with ...
Interactively Learning Moral Norms via Analogy
Autonomous systems must consider the moral ramifications of their actions. Moral norms vary among people, posing a challenge for encoding them explicitly in a system. This paper proposes to enable autonomous agents to use analogical reasoning techniques to interactively learn an individual’s morals.
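To give a flavor of the proposal, a toy sketch of interactive norm learning by analogy follows: stored situations lend their moral label to a sufficiently similar new situation, and the individual is queried otherwise. The NormLearner class, the Jaccard similarity stand-in for structural analogy, and the query threshold are all assumptions for illustration, not the paper's actual method.

```python
from typing import Dict, FrozenSet

def similarity(a: FrozenSet[str], b: FrozenSet[str]) -> float:
    # Jaccard overlap as a crude stand-in for analogical similarity.
    return len(a & b) / len(a | b) if a | b else 0.0

class NormLearner:
    def __init__(self, threshold: float = 0.5):
        self.cases: Dict[FrozenSet[str], bool] = {}  # situation -> permissible?
        self.threshold = threshold

    def judge(self, situation: FrozenSet[str]) -> bool:
        if self.cases:
            best = max(self.cases, key=lambda c: similarity(c, situation))
            if similarity(best, situation) >= self.threshold:
                return self.cases[best]  # transfer label from nearest analogue
        # No close analogue: ask the individual and remember the answer.
        answer = input(f"Is {set(situation)} permissible? [y/n] ") == "y"
        self.cases[situation] = answer
        return answer
```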